12/08/2025
Zhipu AI Unveils GLM-4.5V: Open-Source Multimodal Model with 64K Context and Tunable Thinking Mode
Zhipu AI has released GLM-4.5V, an open-source vision-language model that pairs a 106B-parameter Mixture-of-Experts (MoE) backbone (12B active parameters per forward pass) with a 64K-token context window and a tunable Thinking Mode for deeper multimodal reasoning.
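To make the "tunable Thinking Mode" concrete, here is a minimal Python sketch of how a client might toggle it when sending a multimodal request through an OpenAI-style chat-completions payload. The model identifier, the `thinking` field, and the overall payload shape are assumptions for illustration, not a confirmed description of Zhipu AI's API.

```python
# Hypothetical sketch: building a multimodal chat request for GLM-4.5V
# with the Thinking Mode toggled on or off. Field names ("thinking",
# "glm-4.5v") are illustrative assumptions, not a documented API.

def build_request(prompt: str, image_url: str, thinking: bool) -> dict:
    """Assemble a chat-completions-style request with text plus an image."""
    return {
        "model": "glm-4.5v",  # assumed model identifier
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        # Hypothetical switch: enable deeper step-by-step reasoning,
        # or disable it for faster, shorter answers.
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }

request = build_request(
    "What trend does this chart show?",
    "https://example.com/chart.png",
    thinking=True,
)
```

The point of such a toggle is a latency/quality trade-off: enabling the mode lets the model spend extra tokens reasoning before answering, while disabling it keeps responses short and fast.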